Advanced Feature Normalization and Rapid Model Adaptation for Robust In-Vehicle Speech Recognition

Authors

  • Seong-Jun Hahm
  • Hynek Bořil
  • Pongtep Angkititrakul
  • John H.L. Hansen
Abstract

In this study, we present advanced feature normalization and rapid model adaptation for robust in-vehicle speech recognition. For feature normalization, we use a combination of the recently established quantile-based cepstral dynamics normalization (QCN) and low-pass temporal filtering (RASTALP). Similar to cepstral mean normalization (CMN), QCN aims at alleviating the mismatch between ASR acoustic models and the decoded speech signal. QCN relaxes CMN's assumptions concerning feature distributions, making the normalization more stable in varying adverse environments. RASTALP is a low-pass approximation of RASTA filtering that significantly reduces the transient distortions introduced by the original band-pass filter. Using the normalized features, we adapt the speaker-independent acoustic model to specific speakers. The adaptation method is based on an aspect model (a "mixture-of-mixtures" model). To enable adaptation with only extremely small amounts of adaptation data (i.e., a few seconds), we train a small number of mixture models that can be interpreted as models of probabilistic "speaker clusters" for in-vehicle environments. In this work, we use fMLLR to represent the individual speaker models. The speaker models are mixed using weights determined from the adaptation data. Experimental results show that normalization employing QCN-RASTALP is consistently superior to CMN. We also observe that, in contrast to conventional methods, the adaptation based on the aspect model improves word error rates in in-vehicle noise environments.
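As a rough illustration of how the two normalization schemes differ, the NumPy sketch below contrasts cepstral mean normalization with a quantile-based normalization in the spirit of QCN, followed by a simple low-pass temporal smoothing step standing in for RASTALP. The quantile pair (5th/95th percentiles), the first-order filter, and its coefficient are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def cmn(cepstra):
    """Cepstral mean normalization: subtract the per-dimension mean
    over the utterance (array shape: frames x coefficients)."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)

def quantile_normalize(cepstra, lo=0.05, hi=0.95):
    """QCN-style normalization (illustrative): center each cepstral
    dimension on the midpoint of a low/high quantile pair and scale by
    the quantile range, rather than assuming the mean-centered,
    symmetric distribution implicit in CMN."""
    q_lo = np.quantile(cepstra, lo, axis=0, keepdims=True)
    q_hi = np.quantile(cepstra, hi, axis=0, keepdims=True)
    center = 0.5 * (q_lo + q_hi)
    scale = np.maximum(q_hi - q_lo, 1e-8)
    return (cepstra - center) / scale

def lowpass_temporal_filter(cepstra, alpha=0.9):
    """RASTALP-like smoothing (illustrative): a first-order IIR low-pass
    run along the time axis of each cepstral trajectory, attenuating fast
    frame-to-frame fluctuations; the actual RASTALP filter order and
    coefficients may differ."""
    out = np.empty_like(cepstra)
    state = cepstra[0].copy()
    for t, frame in enumerate(cepstra):
        state = alpha * state + (1.0 - alpha) * frame
        out[t] = state
    return out

# Example: 300 frames of 13-dimensional cepstra with a constant channel offset.
feats = np.random.randn(300, 13) + 2.0
print(cmn(feats).mean(axis=0)[:3])                 # ~0 per dimension after CMN
print(quantile_normalize(feats).mean(axis=0)[:3])  # roughly centered after QCN-style scaling
```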


Related articles

Improving the performance of MFCC for Persian robust speech recognition

Mel-frequency cepstral coefficients (MFCCs) are the most widely used features in speech recognition, but they are very sensitive to noise. In this paper, to achieve satisfactory performance in automatic speech recognition (ASR) applications, we introduce a new noise-robust set of MFCC vectors estimated through the following steps. First, spectral mean normalization is a pre-processing step which applies to t...


A new method for robust speech recognition based on missing data using a bidirectional neural network

The performance of speech recognition systems is greatly reduced when speech is corrupted by noise. One common approach for robust speech recognition systems is missing-feature methods. In this approach, the components in the time-frequency representation of the signal (spectrogram) that exhibit a low signal-to-noise ratio (SNR) are tagged as missing and deleted, then replaced using the remaining components and statistical ...
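The masking step described in the snippet can be sketched as follows; the 0 dB SNR threshold and the per-band mean fill-in are assumptions made for illustration, and the actual reconstruction in that work relies on statistical methods not reproduced here.

```python
import numpy as np

def missing_feature_mask(noisy_spec, noise_est, snr_threshold_db=0.0):
    """Tag time-frequency bins (frames x bands) with estimated SNR below
    the threshold as missing (mask = 0) and keep the rest (mask = 1)."""
    snr_db = 10.0 * np.log10(np.maximum(noisy_spec, 1e-12) /
                             np.maximum(noise_est, 1e-12))
    return (snr_db >= snr_threshold_db).astype(float)

def fill_missing(noisy_spec, mask):
    """Replace missing bins with the mean of the reliable bins in the same
    frequency band -- a crude stand-in for statistical imputation."""
    filled = noisy_spec.copy()
    for f in range(noisy_spec.shape[1]):
        reliable = mask[:, f] > 0
        if reliable.any():
            filled[~reliable, f] = noisy_spec[reliable, f].mean()
    return filled
```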


Advanced front-end for robust speech recognition in extremely adverse environments

In this paper, a unified approach to speech enhancement, feature extraction, and feature normalization for speech recognition in adverse recording conditions is presented. The proposed front-end system consists of several different, independent processing modules. Each of the algorithms contained in these modules has been independently applied to the problem of speech recognition in noise, signi...


Combination of SPLICE and Feature Normalization for Noise Robust Speech Recognition

It is well known that the performance of automatic speech recognition (ASR) systems is easily affected by acoustic mismatch between training and testing conditions. This mismatch is often caused by various kinds of environmental noise or distortion. To reduce the effect of mismatch, feature normalization, feature enhancement, model adaptation, etc. have been studied intensively. Cepstral mean ...
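For context, SPLICE applies a posterior-weighted bias correction to each noisy feature vector using a GMM trained on noisy speech, with bias vectors learned from stereo clean/noisy data. The sketch below shows only that correction step, under assumed, precomputed GMM parameters and bias vectors; the training procedure is not shown.

```python
import numpy as np
from scipy.stats import multivariate_normal

def splice_enhance(y, means, variances, weights, biases):
    """Return x_hat = y + sum_k p(k | y) * r_k for one noisy feature vector y,
    given a diagonal-covariance GMM (means, variances, weights) over noisy
    features and per-component bias vectors r_k (biases, shape K x D)."""
    likes = np.array([
        w * multivariate_normal.pdf(y, mean=m, cov=np.diag(v))
        for w, m, v in zip(weights, means, variances)
    ])
    post = likes / max(likes.sum(), 1e-12)   # component posteriors p(k | y)
    return y + post @ biases                 # posterior-weighted bias correction
```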


Convolutional neural network with adaptable windows for speech recognition

Although speech recognition systems are widely used and their accuracy is continuously improving, there is a considerable performance gap between their accuracy and human recognition ability. This is partially due to high speaker variability in the speech signal. Deep neural networks are among the best tools for acoustic modeling. Recently, using hybrid deep neural network and hidden Markov mo...




Publication year: 2013